In: Political Research Quarterly (PRQ), official journal of the Western Political Science Association, Vol. 68, No. 4, pp. 651-664
How do messages from political elites interact with individual traits of citizens to spur intergroup aggression? Building on research in social psychology, we expect that in places of protracted conflict, violent rhetoric from elites will be enough to mobilize antagonism toward an outgroup, especially among those who are generally less apt to be hostile toward the outgroup. We present results from two large survey experiments, the first conducted with young Jewish-Israeli adults across Israel and the second with a nationally diverse sample of adults in India. The results show that mild "fighting" words (e.g., "battle," "fight"), combined with a reference to the outgroup, provoke significantly greater support for policies that harm the outgroup among some citizens. This effect is largest among individuals low in outgroup prejudice and low in aggressive personality traits, people who are usually less inclined to support policies that hurt the outgroup. The effects of violent rhetoric persist even when paired with policies and rhetoric intended to help the outgroup. This work highlights the importance of considering individual traits and contextual factors together to understand their full impact on intergroup conflict.
In: Political Analysis (PA), the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 31, No. 3, pp. 337-351
Abstract: We propose and explore the possibility that language models can be studied as effective proxies for specific human subpopulations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the "algorithmic bias" within one such tool, the GPT-3 language model, is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property "algorithmic fidelity" and explore its extent in GPT-3. We create "silicon samples" by conditioning the model on thousands of sociodemographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and sociocultural context that characterizes human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.
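The "silicon sampling" procedure the abstract describes can be sketched in a few lines: condition a language model on a first-person backstory, collect one response per human respondent, and compare the resulting categorical response distribution to the human one. The sketch below is illustrative only; `query_model` is a hypothetical stand-in for a real language-model call (e.g., to GPT-3), and the backstories, question, and total-variation comparison are assumptions, not the authors' exact pipeline.

```python
from collections import Counter

def build_prompt(backstory: str, question: str) -> str:
    """Condition the model on a first-person sociodemographic backstory."""
    return f"{backstory}\n\nInterviewer: {question}\nMe:"

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LM call; a real implementation
    would sample a completion from the conditioned language model."""
    return "Democrat" if "liberal" in prompt else "Republican"

def silicon_sample(backstories, question):
    """One model response per human backstory yields a 'silicon sample'."""
    return [query_model(build_prompt(b, question)) for b in backstories]

def total_variation(sample_a, sample_b):
    """Distance between two categorical response distributions (0 = identical)."""
    ca, cb = Counter(sample_a), Counter(sample_b)
    na, nb = len(sample_a), len(sample_b)
    support = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[k] / na - cb[k] / nb) for k in support)

# Toy comparison: two backstories, matched human responses (assumed data).
backstories = ["I am a liberal teacher from Oregon.",
               "I am a conservative farmer from Texas."]
human = ["Democrat", "Republican"]
silicon = silicon_sample(backstories, "Which party do you support?")
print(total_variation(human, silicon))  # 0.0 when the distributions match
```

In the paper's terms, high algorithmic fidelity would correspond to small distances between silicon and human response distributions across many subgroups and questions, not just aggregate agreement.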